#ChatGPT-Based Development Services
mobiloittetech ¡ 11 months ago
Text
How to Integrate ChatGPT with Your Website for Enhanced User Engagement
In today's digital age, providing excellent customer support and engaging user experiences on websites is crucial for businesses. One way to enhance user interaction is by integrating ChatGPT, an AI-powered chatbot, into your website. ChatGPT can understand and respond to user queries in a conversational manner, creating a seamless and interactive experience. In this blog post, we will guide you through the process of integrating ChatGPT with your website, helping you unlock the power of AI-driven customer engagement.
1. Choose a ChatGPT Platform
There are several platforms available that provide ChatGPT services, such as OpenAI's ChatGPT API. Evaluate different platforms based on factors like pricing, ease of integration, scalability, and customization options. Select a platform that aligns with your specific requirements.
2. Obtain API Access
Sign up for the chosen ChatGPT platform and obtain API access. This typically involves creating an account, subscribing to a plan, and receiving an API key or credentials necessary for API integration.
3. Set up Server-Side Integration
To integrate ChatGPT with your website, you will need to set up server-side integration. This involves making API calls from your website's backend to the ChatGPT API. The exact implementation will depend on your server-side programming language or framework.
4. Implement User Interface
Design and implement the user interface for the chatbot on your website. This includes creating a chat widget or integrating the chatbot into existing chat or messaging systems. Customize the appearance and behavior of the chatbot to align with your website's branding and user experience.
5. Handle User Requests
When a user interacts with the chatbot on your website, capture their messages or queries and send them to the server-side code. Use the ChatGPT API to send these user messages as API requests and retrieve the responses.
6. Process Responses and Display
Once you receive the responses from the ChatGPT API, process them in your server-side code. You can handle intents, extract information, and perform any necessary business logic. Finally, send the processed response back to the user interface for display.
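To make steps 3, 5, and 6 concrete, here is a minimal sketch of a backend endpoint that receives a user's message, forwards it to the ChatGPT API, and returns the reply for display. It assumes the official OpenAI Python SDK and Flask; the route name, model name, and system prompt are illustrative placeholders, not fixed requirements.

```python
# A minimal sketch of steps 3, 5, and 6: a Flask backend that receives a
# chat message from the site's front end, forwards it to the OpenAI API,
# and returns the reply for display. Route, model, and prompt are assumed.
import os

from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # never hardcode keys

@app.post("/chat")
def chat():
    user_message = request.get_json()["message"]   # step 5: capture the query
    completion = client.chat.completions.create(   # step 3: server-side API call
        model="gpt-4o-mini",                       # assumed model name
        messages=[
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    reply = completion.choices[0].message.content  # step 6: extract the text
    return jsonify({"reply": reply})               # send back for display

if __name__ == "__main__":
    app.run(port=5000)
```

In production you would also add error handling, rate limiting, and conversation history so the model keeps context across turns.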
7. Enhance the Chatbot's Abilities
Continuously improve and enhance the capabilities of your ChatGPT integration. Experiment with different training approaches, fine-tune the chatbot's responses, and iterate based on user feedback. Regularly update and retrain the chatbot model to ensure it stays up-to-date and provides accurate and relevant responses.
8. Monitor and Evaluate Performance
Monitor the performance of your ChatGPT integration by analyzing user interactions, measuring response times, and tracking user satisfaction metrics. Collect feedback from users to identify areas for improvement and address any issues that arise.
Conclusion
Integrating ChatGPT with your website can significantly enhance user engagement and customer support. By following the steps outlined in this guide, you can seamlessly integrate ChatGPT into your website, providing users with a conversational and interactive experience. Remember to choose a reliable ChatGPT platform, set up server-side integration, implement the user interface, handle user requests, process and display responses, enhance the chatbot's abilities, and monitor its performance. With ChatGPT, you can take your website's user experience to the next level and deliver exceptional customer engagement.
0 notes
mobiloitteuk ¡ 1 year ago
Text
OpenAI ChatGPT Development by Mobiloitte UK
Introducing OpenAI ChatGPT – revolutionizing customer engagement. Harness the power of AI to enhance user interactions, drive conversions, and boost productivity. Empower your brand with Mobiloitte.uk's expertise in crafting seamless, intelligent solutions. Unleash ChatGPT's potential for a transformative digital experience – the future of innovation is now, with Mobiloitte.uk.
0 notes
devstree ¡ 2 months ago
Text
Boost your business with Devstree’s ChatGPT integration services. We help you add smart AI features like chatbots, virtual assistants, and automated support into your website or app. Whether you want to improve customer service, save time with automation, or create a smarter user experience, our team makes the process easy and effective. Let us help you bring the power of ChatGPT to your business with simple, secure, and customized solutions.
1 note ¡ View note
andypantsx3 ¡ 12 days ago
Text
omg i'm sorry but i need to techsplain just one thing in the most doomer terms possible bc i'm scared and i need people to be too. so i saw this post which is like, a great post that gives me a little kick because of how obnoxious i find ai and how its cathartic to see corporate evil overlords overestimate themselves and jump the gun and look silly.
but one thing i don't think people outside of the industry understand is exactly how companies like microsoft plan on scaling the ability of their ai agents. as this post explains, they are not as advanced as some people make them out to be and it is hard to feed them the amount of context they need to perform some tasks well.
but what the second article in the above post explains is microsoft's investment in making a huge variety of the needed contexts more accessible to ai agents. the idea is like, only about 6 months old but what every huge tech firm right now is looking at is mcps (or model context protocols) which is a framework for standardizing how needed context is given to ai agents. to oversimplify an example, maybe an ai coding agent is trained on a zillion pieces of java code but doesn't have insider knowledge of microsoft's internal application authoring processes, meta architecture, repositories, etc. an mcp standardizes how you would then offer those documents to the agent in a way that it can easily read and then use them, so it doesn't have to come pre-loaded with that knowledge. so it could tackle this developer's specific use case, if offered the right knowledge.
and that's the plan. essentially, we're going to see a huge boom in companies offering their libraries, services, knowledge bases (e.g. their bug fix logs) etc as mcps, and ai agents are basically going to go shopping amongst those contexts, plug into whatever context they need for the task at hand, and then power up by like a bajillion percent on the specific task they need to do.
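for the curious, here's roughly what that looks like in code. this is a minimal sketch using the official mcp python sdk (pip install mcp); the tool and resource here are invented stand-ins for the kind of internal knowledge a company might expose, not any real product's api.

```python
# A bare-bones sketch of an MCP server using the official Python SDK.
# The tool and resource are hypothetical examples -- the point is that
# the server describes its own capabilities in a standard way, so any
# MCP-aware agent can discover and call them without being pre-trained
# on this company's internal knowledge.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-knowledge")

@mcp.tool()
def search_bug_fix_log(keyword: str) -> str:
    """Search a (hypothetical) internal bug-fix log for a keyword."""
    # a real server would query an actual knowledge base here
    return f"No fixes found for '{keyword}' (placeholder result)."

@mcp.resource("docs://architecture")
def architecture_overview() -> str:
    """Serve a (hypothetical) internal architecture document."""
    return "Internal meta-architecture overview goes here."

if __name__ == "__main__":
    mcp.run()  # speaks the Model Context Protocol over stdio by default
```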
so ai is powerful but not infallible right now, but it is going to scale pretty quickly i think.
in my opinion the only thing that is ever going to limit ai is not knowledge accessibility, but rather corporate greed. ai models are crazy expensive to train and maintain. every company on earth is also looking at how to optimize them to reduce some of that cost, and i think we will eventually see only a few megalith ais like chatgpt, with a bunch of smaller, more targeted models offered by other companies for them to leverage for specialized tasks.
i genuinely hope that the owners of the megalith models get so greedy that even the cost optimizations they are doing now don't bring down the price enough for their liking and they find shortcuts that ultimately make the models and the entire ecosystem shitty. but i confess i don't know enough about model optimization to know what is likely.
anyway i'm big scared and just wanted to put this slice of knowledge out there for people to be a little more informed.
58 notes ¡ View notes
mariacallous ¡ 8 months ago
Text
On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a “confabulation” or “hallucination” in the AI field.
Upon its release in 2022, OpenAI claimed that Whisper approached “human level robustness” in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.
The fabrications pose particular risks in health care settings. Despite OpenAI’s warnings against using Whisper for “high-risk domains,” over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children’s Hospital Los Angeles are among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.
Nabla acknowledges that Whisper can confabulate, but it also reportedly erases original audio recordings “for data safety reasons.” This could cause additional issues, since doctors cannot verify accuracy against the source material. And deaf patients may be highly impacted by mistaken transcripts since they would have no way to know if medical transcript audio is accurate or not.
The potential problems with Whisper extend beyond health care. Researchers from Cornell University and the University of Virginia studied thousands of audio samples and found Whisper adding nonexistent violent content and racial commentary to neutral speech. They found that 1 percent of samples included “entire hallucinated phrases or sentences which did not exist in any form in the underlying audio” and that 38 percent of those included “explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority.”
In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”
An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.
Why Whisper Confabulates
The key to Whisper’s unsuitability in high-risk domains comes from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, "Researchers aren’t certain why Whisper and similar tools hallucinate," but that isn't true. We know exactly why Transformer-based AI models like Whisper behave this way.
Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.
The transcription output from Whisper is a prediction of what is most likely, not what is most accurate. Accuracy in Transformer-based outputs is typically proportional to the presence of relevant accurate data in the training dataset, but it is never guaranteed. If there is ever a case where there isn't enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model will fall back on what it “knows” about the relationships between sounds and words it has learned from its training data.
According to OpenAI in 2022, Whisper learned those statistical relationships from “680,000 hours of multilingual and multitask supervised data collected from the web.” But we now know a little more about the source. Given Whisper's well-known tendency to produce certain outputs like "thank you for watching," "like and subscribe," or "drop a comment in the section below" when provided silent or garbled inputs, it's likely that OpenAI trained Whisper on thousands of hours of captioned audio scraped from YouTube videos. (The researchers needed audio paired with existing captions to train the model.)
There's also a phenomenon called “overfitting” in AI models where information (in this case, text found in audio transcriptions) encountered more frequently in the training data is more likely to be reproduced in an output. In cases where Whisper encounters poor-quality audio in medical notes, the AI model will produce what its neural network predicts is the most likely output, even if it is incorrect. And the most likely output for any given YouTube video, since so many people say it, is “thanks for watching.”
In other cases, Whisper seems to draw on the context of the conversation to fill in what should come next, which can lead to problems because its training data could include racist commentary or inaccurate medical information. For example, if many examples of training data featured speakers saying the phrase “crimes by Black criminals,” when Whisper encounters a “crimes by [garbled audio] criminals” audio sample, it will be more likely to fill in the transcription with “Black."
In the original Whisper model card, OpenAI researchers wrote about this very phenomenon: "Because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself."
So in that sense, Whisper "knows" something about the content of what is being said and keeps track of the context of the conversation, which can lead to issues like the one where Whisper identified two women as being Black even though that information was not contained in the original audio. Theoretically, this erroneous scenario could be reduced by using a second AI model trained to pick out areas of confusing audio where the Whisper model is likely to confabulate and flag the transcript in that location, so a human could manually check those instances for accuracy later.
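As an illustration of that flagging idea, here is a rough sketch that uses the open-source whisper package's own per-segment confidence statistics as a stand-in for a second model. The thresholds and file name are arbitrary assumptions, not OpenAI's or Nabla's actual method.

```python
# A rough sketch of flagging likely confabulations, using statistics the
# open-source whisper package (pip install openai-whisper) already emits
# per segment. Thresholds and file name are arbitrary assumptions; a
# human would still re-check the flagged stretches by ear.
import whisper

model = whisper.load_model("base")
result = model.transcribe("patient_visit.wav")

for seg in result["segments"]:
    suspicious = (
        seg["avg_logprob"] < -1.0       # model was unsure of its own tokens
        or seg["no_speech_prob"] > 0.5  # may be transcribing silence/noise
    )
    flag = "CHECK MANUALLY" if suspicious else "ok"
    print(f'[{seg["start"]:7.2f}-{seg["end"]:7.2f}] {flag}: {seg["text"]}')
```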
Clearly, OpenAI's advice not to use Whisper in high-risk domains, such as critical medical records, was a good one. But health care companies are constantly driven by a need to decrease costs by using seemingly "good enough" AI tools—as we've seen with Epic Systems using GPT-4 for medical records and UnitedHealth using a flawed AI model for insurance decisions. It's entirely possible that people are already suffering negative outcomes due to AI mistakes, and fixing them will likely involve some sort of regulation and certification of AI tools used in the medical field.
87 notes ¡ View notes
tangentiallly ¡ 6 months ago
Text
One way to spot patterns is to show AI models millions of labelled examples. This method requires humans to painstakingly label all this data so they can be analysed by computers. Without them, the algorithms that underpin self-driving cars or facial recognition remain blind. They cannot learn patterns.
The algorithms built in this way now augment or stand in for human judgement in areas as varied as medicine, criminal justice, social welfare and mortgage and loan decisions. Generative AI, the latest iteration of AI software, can create words, code and images. This has transformed them into creative assistants, helping teachers, financial advisers, lawyers, artists and programmers to co-create original works.
To build AI, Silicon Valley’s most illustrious companies are fighting over the limited talent of computer scientists in their backyard, paying hundreds of thousands of dollars to a newly minted Ph.D. But to train and deploy them using real-world data, these same companies have turned to the likes of Sama, and their veritable armies of low-wage workers with basic digital literacy, but no stable employment.
Sama isn’t the only service of its kind globally. Start-ups such as Scale AI, Appen, Hive Micro, iMerit and Mighty AI (now owned by Uber), and more traditional IT companies such as Accenture and Wipro are all part of this growing industry estimated to be worth $17bn by 2030.
Because of the sheer volume of data that AI companies need to be labelled, most start-ups outsource their services to lower-income countries where hundreds of workers like Ian and Benja are paid to sift and interpret data that trains AI systems.
Displaced Syrian doctors train medical software that helps diagnose prostate cancer in Britain. Out-of-work college graduates in recession-hit Venezuela categorize fashion products for e-commerce sites. Impoverished women in Kolkata’s Metiabruz, a poor Muslim neighbourhood, have labelled voice clips for Amazon’s Echo speaker. Their work couches a badly kept secret about so-called artificial intelligence systems – that the technology does not ‘learn’ independently, and it needs humans, millions of them, to power it. Data workers are the invaluable human links in the global AI supply chain.
This workforce is largely fragmented, and made up of the most precarious workers in society: disadvantaged youth, women with dependents, minorities, migrants and refugees. The stated goal of AI companies and the outsourcers they work with is to include these communities in the digital revolution, giving them stable and ethical employment despite their precarity. Yet, as I came to discover, data workers are as precarious as factory workers, their labour is largely ghost work and they remain an undervalued bedrock of the AI industry.
As this community emerges from the shadows, journalists and academics are beginning to understand how these globally dispersed workers impact our daily lives: the wildly popular content generated by AI chatbots like ChatGPT, the content we scroll through on TikTok, Instagram and YouTube, the items we browse when shopping online, the vehicles we drive, even the food we eat, it’s all sorted, labelled and categorized with the help of data workers.
Milagros Miceli, an Argentinian researcher based in Berlin, studies the ethnography of data work in the developing world. When she started out, she couldn’t find anything about the lived experience of AI labourers, nothing about who these people actually were and what their work was like. ‘As a sociologist, I felt it was a big gap,’ she says. ‘There are few who are putting a face to those people: who are they and how do they do their jobs, what do their work practices involve? And what are the labour conditions that they are subject to?’
Miceli was right – it was hard to find a company that would allow me access to its data labourers with minimal interference. Secrecy is often written into their contracts in the form of non-disclosure agreements that forbid direct contact with clients and public disclosure of clients’ names. This is usually imposed by clients rather than the outsourcing companies. For instance, Facebook-owner Meta, who is a client of Sama, asks workers to sign a non-disclosure agreement. Often, workers may not even know who their client is, what type of algorithmic system they are working on, or what their counterparts in other parts of the world are paid for the same job.
The arrangements of a company like Sama – low wages, secrecy, extraction of labour from vulnerable communities – are geared towards inequality. After all, this is ultimately affordable labour. Providing employment to minorities and slum youth may be empowering and uplifting to a point, but these workers are also comparatively inexpensive, with almost no relative bargaining power, leverage or resources to rebel.
Even the objective of data-labelling work felt extractive: it trains AI systems, which will eventually replace the very humans doing the training. But of the dozens of workers I spoke to over the course of two years, not one was aware of the implications of training their replacements, that they were being paid to hasten their own obsolescence.
— Madhumita Murgia, Code Dependent: Living in the Shadow of AI
71 notes ¡ View notes
why-am-i-not-fictional ¡ 1 year ago
Text
Things to script - nature or status of realities
This is something I recently started inputting into my DRs to make them better and safer. I got a lot of help from ChatGPT to categorize all these things. I wanted to share it with you guys too :) feel free to use anything for your scripts. Happy Shifting!!!
None of the below forms of discrimination exist in any of my DRs
Misogyny
Racism
Homophobia
Transphobia
Classism
Ableism
Ageism
Xenophobia
Islamophobia
Anti-Semitism
Colorism
Nationalism
Casteism
Environmental injustice
Sexism
Sizeism
Religious discrimination
Ethnic discrimination
Discrimination based on immigration status
Discrimination based on language
Discrimination based on nationality
Discrimination based on indigenous status
Discrimination based on political beliefs
Discrimination based on marital status
Discrimination based on parental status
Discrimination based on veteran status
Discrimination based on HIV/AIDS status
Discrimination based on neurodiversity
Discrimination based on mental health status
Discrimination based on physical appearance
Discrimination based on cultural practices
Discrimination based on regional or geographical origin
Discrimination based on caste or social status
Discrimination based on educational background
Discrimination based on housing status
Discrimination based on criminal record
Discrimination based on economic status
Discrimination based on access to healthcare
Discrimination based on access to education
Discrimination based on access to employment opportunities
All of the below issues were solved many years ago and they do not exist in the times of any of my DRs
Poverty
Economic inequality
Environmental degradation
Climate change
Pollution
Deforestation
Political instability
Armed conflicts
Civil wars
Humanitarian crises
Global health challenges
Infectious diseases
Pandemics
Inadequate healthcare systems
Lack of access to essential medicines
Educational disparities
Limited access to quality education
Illiteracy
Child labor
Child marriage
Gender inequality
Women's rights violations
Human trafficking
Forced labor
Modern slavery
Corruption
Lack of transparency
Ineffective governance
Authoritarian regimes
Suppression of free speech
Violations of human rights
Arbitrary detention
Torture
Persecution
Indigenous rights violations
Land grabs
Cultural appropriation
Technological and digital divides
Ethical dilemmas in technology
Privacy concerns
Data breaches
Cybersecurity threats
Food insecurity
Malnutrition
Water scarcity
Access to clean water
Sanitation issues
Homelessness
Housing affordability
Urbanization challenges
Aging population
Elder abuse
Mental health stigma
Lack of access to mental health services
Substance abuse
Addiction
Disability rights violations
Accessibility barriers
Stigmatization of disabilities
LGBTQ+ rights violations
Discrimination based on sexual orientation
Discrimination based on gender identity
Family rejection
Reproductive rights violations
Access to reproductive healthcare
Maternal mortality
Child mortality
Access to clean energy
Energy poverty
Fossil fuel dependence
Renewable energy transition challenges
Wildlife conservation
Endangered species protection
Animal rights violations
All the DRs I shift to are abundant in the following things
Compassion
Empathy
Cooperation
Collaboration
Sustainability
Environmental stewardship
Peacebuilding
Conflict resolution
Dialogue
Reconciliation
Education
Knowledge-sharing
Critical thinking
Cultural diversity
Cultural respect
Inclusivity
Equality
Justice
Ethical leadership
Integrity
Accountability
Service to others
Health promotion
Well-being
Healthcare access
Mental health support
Social support systems
Innovation
Creativity
Social justice
Fairness
Equity
Human rights
Freedom of expression
Freedom of assembly
Democratic governance
Rule of law
Transparency
Accountability mechanisms
Community empowerment
Grassroots activism
Civic engagement
Volunteerism
Philanthropy
Sustainable development
Responsible consumption
Renewable energy adoption
Conservation
Biodiversity protection
Animal welfare
Gender equality
Women's empowerment
LGBTQ+ rights
Disability rights
Indigenous rights
Racial equity
Anti-discrimination policies
Social welfare programs
Poverty alleviation
Economic empowerment
Access to education
Access to clean water
Sanitation infrastructure
Housing rights
Food security
Global cooperation
International aid and development
Humanitarian assistance
Conflict prevention
Diplomacy
Multilateralism
Solidarity
Tolerance
Forgiveness
Resilience
All of the DRs I shift into are currently successfully overcoming the following challenges as they arise
Sustaining Progress: Maintaining the momentum of positive change and preventing regression into previous discriminatory attitudes and practices.
Ensuring Equity: Addressing lingering disparities and ensuring that the benefits of progress are equitably distributed across all communities.
Adapting to Changing Circumstances: Remaining flexible and responsive to evolving societal needs, dynamics, and challenges over time.
Balancing Interests: Navigating competing interests, values, and priorities among diverse stakeholders in society.
Preventing Backlash: Mitigating potential backlash from individuals or groups who may resist or oppose efforts to eliminate discrimination and promote positive change.
Addressing Unforeseen Consequences: Anticipating and addressing unintended consequences or side effects of interventions aimed at addressing societal issues.
Managing Complexity: Dealing with the complexity of interconnected social, economic, political, and environmental systems, which may require interdisciplinary approaches and collaboration.
Maintaining Engagement: Sustaining public engagement, participation, and support for ongoing efforts to promote equality, justice, and well-being.
Ensuring Accountability: Holding individuals, institutions, and governments accountable for upholding principles of fairness, transparency, and ethical conduct.
Resisting Entrenched Power Structures: Challenging and dismantling entrenched power structures, systems of privilege, and institutionalized forms of discrimination.
Addressing Global Challenges: Collaborating internationally to address global challenges such as climate change, inequality, and conflict, which require coordinated action across borders.
Cultural Sensitivity: Respecting and accommodating diverse cultural norms, values, and perspectives while promoting universal principles of human rights and equality.
Managing Resources: Efficiently allocating resources and managing competing demands to sustain progress and address ongoing needs in society.
Promoting Inclusivity: Ensuring that marginalized or vulnerable groups are included in decision-making processes and benefit from positive changes in society.
Building Trust: Fostering trust, cooperation, and solidarity among individuals, communities, and institutions to sustain positive social transformation.
Addressing New Challenges: Remaining vigilant and adaptive to emerging challenges and threats to equality, justice, and well-being in an ever-changing world.
32 notes ¡ View notes
manorinthewoods ¡ 29 days ago
Text
"Welcome to the AI trough of disillusionment"
"When the chief executive of a large tech firm based in San Francisco shares a drink with the bosses of his Fortune 500 clients, he often hears a similar message. “They’re frustrated and disappointed. They say: ‘I don’t know why it’s taking so long. I’ve spent money on this. It’s not happening’”.
"For many companies, excitement over the promise of generative artificial intelligence (AI) has given way to vexation over the difficulty of making productive use of the technology. According to S&P Global, a data provider, the share of companies abandoning most of their generative-AI pilot projects has risen to 42%, up from 17% last year. The boss of Klarna, a Swedish buy-now, pay-later provider, recently admitted that he went too far in using the technology to slash customer-service jobs, and is now rehiring humans for the roles."
"Consumers, for their part, continue to enthusiastically embrace generative AI. [Really?] Sam Altman, the boss of OpenAI, recently said that its ChatGPT bot was being used by some 800m people a week, twice as many as in February. Some already regularly turn to the technology at work. Yet generative AI’s ["]transformative potential["] will be realised only if a broad swathe of companies systematically embed it into their products and operations. Faced with sluggish progress, many bosses are sliding into the “trough of disillusionment”, says John Lovelock of Gartner, referring to the stage in the consultancy’s famed “hype cycle” that comes after the euphoria generated by a new technology.
"This poses a problem for the so-called hyperscalers—Alphabet, Amazon, Microsoft and Meta—that are still pouring vast sums into building the infrastructure underpinning AI. According to Pierre Ferragu of New Street Research, their combined capital expenditures are on course to rise from 12% of revenues a decade ago to 28% this year. Will they be able to generate healthy enough returns to justify the splurge? [I'd guess not.]
"Companies are struggling to make use of generative AI for many reasons. Their data troves are often siloed and trapped in archaic it systems. Many experience difficulties hiring the technical talent needed. And however much potential they see in the technology, bosses know they have brands to protect, which means minimising the risk that a bot will make a damaging mistake or expose them to privacy violations or data breaches.
"Meanwhile, the tech giants continue to preach AI’s potential. [Of course.] Their evangelism was on full display this week during the annual developer conferences of Microsoft and Alphabet’s Google. Satya Nadella and Sundar Pichai, their respective bosses, talked excitedly about a “platform shift” and the emergence of an “agentic web” populated by semi-autonomous AI agents interacting with one another on behalf of their human masters. [Jesus christ. Why? Who benefits from that? Why would anyone want that? What's the point of using the Internet if it's all just AIs pretending to be people? Goddamn billionaires.]
"The two tech bosses highlighted how AI models are getting better, faster, cheaper and more widely available. At one point Elon Musk announced to Microsoft’s crowd via video link that xAI, his AI lab, would be making its Grok models available on the tech giant’s Azure cloud service (shortly after Mr Altman, his nemesis, used the same medium to tout the benefits of OpenAI’s deep relationship with Microsoft). [Nobody wanted Microsoft to pivot to the cloud.] Messrs Nadella and Pichai both talked up a new measure—the number of tokens processed in generative-AI models—to demonstrate booming usage. [So now they're fiddling with the numbers to make them look better.
"Fuddy-duddy measures of business success, such as sales or profit, were not in focus. For now, the meagre cloud revenues Alphabet, Amazon and Microsoft are making from AI, relative to the magnitude of their investments, come mostly from AI labs and startups, some of which are bankrolled by the giants themselves.
"Still, as Mr Lovelock of Gartner argues, much of the benefit of the technology for the hyperscalers will come from applying it to their own products and operations. At its event, Google announced that it will launch a more conversational “AI mode” for its search engine, powered by its Gemini models. It says that the AI summaries that now appear alongside its search results are already used by more than 1.5bn people each month. [I'd imagine this is giving a generous definition of 'used'. The AI overviews spawn on basically every search - that doesn't mean everyone's using them. Although, probably, a lot of people are.] Google has also introduced generative AI into its ad business [so now the ads are even less appealing], to help companies create content and manage their campaigns. Meta, which does not sell cloud computing, has weaved the technology into its ad business using its open-source Llama models. Microsoft has embedded AI into its suite of workplace apps and its coding platform, Github. Amazon has applied the technology in its e-commerce business to improve product recommendations and optimise logistics. AI may also allow the tech giants to cut programming jobs. This month Microsoft laid off 6,000 workers, many of whom were reportedly software engineers. [That's going to come back to bite you. The logistics is a valid application, but not the whole 'replacing programmers with AI' bit. Better get ready for the bugs!]
"These efforts, if successful, may even encourage other companies to keep experimenting with the technology until they, too, can make it work. Troughs, after all, have two sides; next in Gartner’s cycle comes the “slope of enlightenment”, which sounds much more enjoyable. At that point, companies that have underinvested in AI may come to regret it. [I doubt it.] The cost of falling behind is already clear at Apple, which was slower than its fellow tech giants to embrace generative AI. It has flubbed the introduction of a souped-up version of its voice assistant Siri, rebuilt around the technology. The new bot is so bug-ridden its rollout has been postponed.
"Mr Lovelock’s bet is that the trough will last until the end of next year. In the meantime, the hyperscalers have work to do. Kevin Scott, Microsoft’s chief technology officer, said this week that for AI agents to live up to their promise, serious work needs to be done on memory, so that they can recall past interactions. The web also needs new protocols to help agents gain access to various data streams. [What an ominous way to phrase that.] Microsoft has now signed up to an open-source one called Model Context Protocol, launched in November by Anthropic, another AI lab, joining Amazon, Google and OpenAI.
"Many companies say that what they need most is not cleverer AI models, but more ways to make the technology useful. Mr Scott calls this the “capability overhang.” He and Anthropic’s co-founder Dario Amodei used the Microsoft conference to urge users to think big and keep the faith. [Yeah, because there's no actual proof this helps. Except in medicine and science.] “Don’t look away,” said Mr Amodei. “Don’t blink.” ■"
3 notes ¡ View notes
captainsophiestark ¡ 2 years ago
Text
Hey guys! Post with more info to follow, but for all my content creators struggling with the developing AI situation and violations of their work and theft of their intellectual property, this is a general notice and disclaimer explaining the US legal protections and Tumblr policy protecting our work and our intellectual property against AI:
Please be advised that all of my work is protected by copyright law as my intellectual property.  17 U.S.C. §103, and U.S. Copyright Office, Compendium of U.S. Copyright Office Practices (3rd ed. 2021) §311.  It is also protected by Tumblr’s terms of use, as set forth in its Terms of Service §6.  No one is authorized to take my work and submit it to any Artificial Intelligence treatment(s) of any kind or nature whatsoever or to otherwise use my work in violation of my copyrights.  AI works may not be copyrighted, so cannot be considered derivative works.  §313.2.  Therefore, there is no circumstance under which submission of my work to AI is allowed, and doing so is illegal and will subject you to litigation.
My mother, a lawyer and a literal superhero, researched the laws around fanfiction copyright and its use in AI after an anon informed me that they'd fed my work into chatgpt for "better endings". Their use of my work in that way was illegal, and while it seems the account that sent me the message is no longer on Tumblr, I'm going to be adding a link to this statement on all my fics because honestly, I'm done with this AI bullshit. Know that you have rights as a fandom content creator just like any other artist, and if people are still feeding work into AI without permission, feel free to use this in any way that helps you to get them to stop.
If you're not a US resident, obviously, the US copyright law won't apply to you or protect you. I don't know anything about the copyright laws of other countries, and my mom didn't include that in her research, but if you're a US-based creator then your work is protected.
Tagging a few fellow creators under the cut in case this applies to or helps them in anyway. We'll get through the AI bullshit together.
@bandshirts-andbooks @ghostofskywalker @arttheclown-coveredinblood
56 notes ¡ View notes
realcleverscience ¡ 11 months ago
Text
AI & Data Centers vs Water + Energy
We all know that AI has issues, including energy and water consumption. But these fields are still young and lots of research is looking into making them more efficient. Remember, most technologies tend to suck when they first come out.
Deploying high-performance, energy-efficient AI
"You give up that kind of amazing general purpose use like when you're using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics, if you narrow your range, these smaller models can give you equivalent or better kind of capability, but at a tiny fraction of the energy consumption," says Ball."...
"I think liquid cooling is probably one of the most important low hanging fruit opportunities... So if you move a data center to a fully liquid cooled solution, this is an opportunity of around 30% of energy consumption, which is sort of a wow number.... There's more upfront costs, but actually it saves money in the long run... One of the other benefits of liquid cooling is we get out of the business of evaporating water for cooling...
The other opportunity you mentioned was density and bringing higher and higher density of computing has been the trend for decades. That is effectively what Moore's Law has been pushing us forward... [i.e. chips' rate of improvement is faster than the growth of their energy needs. This means each year chips are capable of doing more calculations with less energy. - RCS] ... So the energy savings there is substantial, not just because those chips are very, very efficient, but because the amount of networking equipment and ancillary things around those systems is a lot less because you're using those resources more efficiently with those very high dense components"
New tools are available to help reduce the energy that AI models devour
"The trade-off for capping power is increasing task time — GPUs will take about 3 percent longer to complete a task, an increase Gadepally says is "barely noticeable" considering that models are often trained over days or even months... Side benefits have arisen, too. Since putting power constraints in place, the GPUs on LLSC supercomputers have been running about 30 degrees Fahrenheit cooler and at a more consistent temperature, reducing stress on the cooling system. Running the hardware cooler can potentially also increase reliability and service lifetime. They can now consider delaying the purchase of new hardware — reducing the center's "embodied carbon," or the emissions created through the manufacturing of equipment — until the efficiencies gained by using new hardware offset this aspect of the carbon footprint. They're also finding ways to cut down on cooling needs by strategically scheduling jobs to run at night and during the winter months."
AI just got 100-fold more energy efficient
Northwestern University engineers have developed a new nanoelectronic device that can perform accurate machine-learning classification tasks in the most energy-efficient manner yet. Using 100-fold less energy than current technologies...
“Today, most sensors collect data and then send it to the cloud, where the analysis occurs on energy-hungry servers before the results are finally sent back to the user,” said Northwestern’s Mark C. Hersam, the study’s senior author. “This approach is incredibly expensive, consumes significant energy and adds a time delay...
For current silicon-based technologies to categorize data from large sets like ECGs, it takes more than 100 transistors — each requiring its own energy to run. But Northwestern’s nanoelectronic device can perform the same machine-learning classification with just two devices. By reducing the number of devices, the researchers drastically reduced power consumption and developed a much smaller device that can be integrated into a standard wearable gadget."
Researchers develop state-of-the-art device to make artificial intelligence more energy efficient
""This work is the first experimental demonstration of CRAM, where the data can be processed entirely within the memory array without the need to leave the grid where a computer stores information,"...
According to the new paper's authors, a CRAM-based machine learning inference accelerator is estimated to achieve an improvement on the order of 1,000. Another example showed an energy savings of 2,500 and 1,700 times compared to traditional methods"
5 notes ¡ View notes
darkmaga-returns ¡ 5 months ago
Text
It was only a matter of time before an innovative mind created the next mainstream AI tool to compete with ChatGPT. In a massive step toward AI advancement, Liang Wenfeng of China launched DeepSeek, an open-source large language model (LLM) intended to compete with, if not one day overshadow, ChatGPT. The launch immediately wiped $1 trillion off the US stock exchange, and the tech competition between China and the US is coming to a head.
ChatGPT is run by OpenAI. Its creation marked the dawn of a new way of interacting with the internet and accessing information. Users can ask AI to instantaneously perform actions and it is reshaping the way the world operates. People have created businesses based on ChatGPT. There have been countless warnings of AI replacing human jobs. Governments are still uncertain how to regulate these services and the data they pull from users. Of course, countless services like ChatGPT have launched in recent years, but DeepSeek may be the next best alternative.
Wenfeng hired all the top minds graduating from Chinese universities and paid them top dollar to create DeepSeek for a fraction of what it took to create ChatGPT. OpenAI’s GPT-4, launched in 2023, cost $100 million to develop; DeepSeek-R1 began with a $6 million investment.
2 notes ¡ View notes
mariacallous ¡ 18 days ago
Text
In the near future one hacker may be able to unleash 20 zero-day attacks on different systems across the world all at once. Polymorphic malware could rampage across a codebase, using a bespoke generative AI system to rewrite itself as it learns and adapts. Armies of script kiddies could use purpose-built LLMs to unleash a torrent of malicious code at the push of a button.
Case in point: as of this writing, an AI system is sitting at the top of several leaderboards on HackerOne—an enterprise bug bounty system. The AI is XBOW, a system aimed at whitehat pentesters that “autonomously finds and exploits vulnerabilities in 75 percent of web benchmarks,” according to the company’s website.
AI-assisted hackers are a major fear in the cybersecurity industry, even if their potential hasn’t quite been realized yet. “I compare it to being on an emergency landing on an aircraft where it’s like ‘brace, brace, brace’ but we still have yet to impact anything,” Hayden Smith, the cofounder of security company Hunted Labs, tells WIRED. “We’re still waiting to have that mass event.”
Generative AI has made it easier for anyone to code. The LLMs improve every day, new models spit out more efficient code, and companies like Microsoft say they’re using AI agents to help write their codebase. Anyone can spit out a Python script using ChatGPT now, and vibe coding—asking an AI to write code for you, even if you don’t have much of an idea how to do it yourself—is popular; but there’s also vibe hacking.
“We’re going to see vibe hacking. And people without previous knowledge or deep knowledge will be able to tell AI what it wants to create and be able to go ahead and get that problem solved,” Katie Moussouris, the founder and CEO of Luta Security, tells WIRED.
Vibe hacking frontends have existed since 2023. Back then, a purpose-built LLM for generating malicious code called WormGPT spread on Discord groups, Telegram servers, and darknet forums. When security professionals and the media discovered it, its creators pulled the plug.
WormGPT faded away, but other services that billed themselves as blackhat LLMs, like FraudGPT, replaced it. But WormGPT’s successors had problems. As security firm Abnormal AI notes, many of these apps may have just been jailbroken versions of ChatGPT with some extra code to make them appear as if they were a stand-alone product.
Better then, if you’re a bad actor, to just go to the source. ChatGPT, Gemini, and Claude are easily jailbroken. Most LLMs have guard rails that prevent them from generating malicious code, but there are whole communities online dedicated to bypassing those guardrails. Anthropic even offers a bug bounty to people who discover new ones in Claude.
“It’s very important to us that we develop our models safely,” an OpenAI spokesperson tells WIRED. “We take steps to reduce the risk of malicious use, and we’re continually improving safeguards to make our models more robust against exploits like jailbreaks. For example, you can read our research and approach to jailbreaks in the GPT-4.5 system card, or in the OpenAI o3 and o4-mini system card.”
Google did not respond to a request for comment.
In 2023, security researchers at Trend Micro got ChatGPT to generate malicious code by prompting it into the role of a security researcher and pentester. ChatGPT would then happily generate PowerShell scripts based on databases of malicious code.
“You can use it to create malware,” Moussouris says. “The easiest way to get around those safeguards put in place by the makers of the AI models is to say that you’re competing in a capture-the-flag exercise, and it will happily generate malicious code for you.”
Unsophisticated actors like script kiddies are an age-old problem in the world of cybersecurity, and AI may well amplify their profile. “It lowers the barrier to entry to cybercrime,” Hayley Benedict, a Cyber Intelligence Analyst at RANE, tells WIRED.
But, she says, the real threat may come from established hacking groups who will use AI to further enhance their already fearsome abilities.
“It’s the hackers that already have the capabilities and already have these operations,” she says. “It’s being able to drastically scale up these cybercriminal operations, and they can create the malicious code a lot faster.”
Moussouris agrees. “The acceleration is what is going to make it extremely difficult to control,” she says.
Hunted Labs’ Smith also says that the real threat of AI-generated code is in the hands of someone who already knows the code in and out who uses it to scale up an attack. “When you’re working with someone who has deep experience and you combine that with, ‘Hey, I can do things a lot faster that otherwise would have taken me a couple days or three days, and now it takes me 30 minutes.’ That's a really interesting and dynamic part of the situation,” he says.
According to Smith, an experienced hacker could design a system that defeats multiple security protections and learns as it goes. The malicious bit of code would rewrite its malicious payload as it learns on the fly. “That would be completely insane and difficult to triage,” he says.
Smith imagines a world where 20 zero-day events all happen at the same time. “That makes it a little bit more scary,” he says.
Moussouris says that the tools to make that kind of attack a reality exist now. “They are good enough in the hands of a good enough operator,” she says, but AI is not quite good enough yet for an inexperienced hacker to operate hands-off.
“We’re not quite there in terms of AI being able to fully take over the function of a human in offensive security,” she says.
The primal fear that chatbot code sparks is that anyone will be able to do it, but the reality is that a sophisticated actor with deep knowledge of existing code is much more frightening. XBOW may be the closest thing to an autonomous “AI hacker” that exists in the wild, and it’s the creation of a team of more than 20 skilled people whose previous work experience includes GitHub, Microsoft, and half a dozen assorted security companies.
It also points to another truth. “The best defense against a bad guy with AI is a good guy with AI,” Benedict says.
For Moussouris, the use of AI by both blackhats and whitehats is just the next evolution of a cybersecurity arms race she’s watched unfold over 30 years. “It went from: ‘I’m going to perform this hack manually or create my own custom exploit,’ to, ‘I’m going to create a tool that anyone can run and perform some of these checks automatically,’” she says.
“AI is just another tool in the toolbox, and those who do know how to steer it appropriately now are going to be the ones that make those vibey frontends that anyone could use.”
9 notes ¡ View notes
jcmarchi ¡ 6 months ago
Text
OpenAI launches Sora: AI video generator now public
OpenAI has made its artificial intelligence video generator, Sora, available to the general public in the US, following an initial limited release to certain artists, filmmakers, and safety testers.
Introduced in February, the tool faced overwhelming demand on its launch day, temporarily halting new sign-ups due to high website traffic.
Changing video creation with text-to-video creation
The text-to-video generator enables the creation of video clips from written prompts. OpenAI’s website showcases an example: a serene depiction of woolly mammoths traversing a desert landscape.
In a recent blog post, OpenAI expressed its aspiration for Sora to foster innovative creativity and narrative expansion through advanced video storytelling.
The company, also behind the widely used ChatGPT, continues to expand its repertoire in generative AI, including voice cloning and integrating its image generator, Dall-E, with ChatGPT.
Supported by Microsoft, OpenAI is now a leading force in the AI sector, with a valuation nearing $160 billion.
Before public access, technology reviewer Marques Brownlee previewed Sora, finding it simultaneously unsettling and impressive. He noted particular prowess in rendering landscapes despite some inaccuracies in physical representation. Early access filmmakers reported occasional odd visual errors.
What you can expect with Sora
Output options. Generate videos up to 20 seconds long in various aspect ratios. The new ‘Turbo’ model speeds up generation times significantly.
Web platform. Organize and view your creations, explore prompts from other users, and discover featured content for inspiration.
Creative tools. Leverage advanced tools like Remix for scene editing, Storyboard for stitching multiple outputs, Blend, Loop, and Style presets to enhance your creations.
Availability. Sora is now accessible to ChatGPT subscribers. For $200/month, the Pro plan unlocks unlimited generations, higher resolution outputs, and watermark removal.
Content restrictions. OpenAI is limiting uploads involving real people, minors, or copyrighted materials. Initially, only a select group of users will have permission to upload real people as input.
Territorial rollout. Due to regulatory concerns, the rollout will exclude the EU, UK, and other specific regions.
Navigating regulations and controversies
Sora remains restricted in those regions as OpenAI navigates regulatory landscapes, including the UK’s Online Safety Act, the EU’s Digital Services Act, and GDPR.
Controversies have also surfaced, such as a temporary shutdown caused by artists exploiting a loophole to protest against potential negative impacts on their professions. These artists accused OpenAI of glossing over these concerns by leveraging their creativity to enhance the product’s image.
Despite advancements, generative AI technologies like Sora are susceptible to generating erroneous or plagiarized content. This has raised alarms about potential misuse for creating deceptive media, including deepfakes.
OpenAI has committed to taking precautions with Sora, including restrictions on depicting specific individuals and explicit content. These measures aim to mitigate misuse while providing access to subscribers in the US and several other countries, excluding the UK and Europe.
2 notes ¡ View notes
athomewithladylux ¡ 7 months ago
Text
How I use AI as an admin assistant to improve my job performance:
First of all, stop being scared of AI. It's like being scared of cars. They're here to stay, there are some dangers, but it's super useful so you should figure out how to make them work for you. Second, make sure you're not sharing personal or company secrets. AI is great but if you're not paying the providing company for the tool with cash then you are paying with your data. If you're not sure if the AI service your company uses is secure, ask IT. If your company isn't using AI ask them why, what the policy on AI use is, and stick to that policy.
Now, here's how I use AI to improve my work performance:
Make a Personal Assistant: I use enterprise ChatGPT's custom GPT feature to make all kinds of things. An email writing chat (where I can put in details and get it to write the email and match my tone and style), a reference library for a major project (so I always have the information and source at my fingertips in a meeting), one for the company's brand voice and style so anything I send to marketing is easy for them to work with, and gets picked up faster. I treat these GPTs like an intern who tries really hard but may not always get things right. I always review and get the GPT to cite its sources so I can confirm things. It saves hours of repetitive work every week.
Analyze complex data: I deal with multiple multi-page documents and Word's "compare" feature is frankly terrible. I can drop two similar documents into my AI and get it to tell me what's different and where the differences are. Again, a huge timesaver.
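A minimal sketch of that comparison workflow, assuming access to the OpenAI Python SDK rather than an enterprise chat interface; the file names, model name, and prompt wording are placeholders. As with everything else here, treat the output like an intern's work and verify it against the source documents.

```python
# A minimal sketch of AI-assisted document comparison via the OpenAI
# Python SDK. File names, model name, and prompt wording are placeholder
# assumptions. Only do this with data and tools your IT policy allows.
import os
from pathlib import Path

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

doc_a = Path("contract_v1.txt").read_text()
doc_b = Path("contract_v2.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": (
            "Compare the two documents below. List every substantive "
            "difference and say where it occurs. Do not infer anything "
            "that is not in the text; say so if you are unsure.\n\n"
            f"--- DOCUMENT A ---\n{doc_a}\n\n--- DOCUMENT B ---\n{doc_b}"
        ),
    }],
)
print(response.choices[0].message.content)  # review before relying on it
```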
Prepare for meetings and career progression support: Before any meeting I upload any materials from the organizers and anything relevant from my unit, and then get it to tell me, given the audience, what sort of questions might be asked in the meeting and what are the answers. I also ask it to align my questions and planned actions to the strategic plan. 
Plan my career development: I told my AI where I wanted to go in the next five years and got it to analyze my resume and current role. I asked it to show me where I needed skills, and provide examples of where I could get those skills. Then I asked it to cost out the classes and give me a timeline. Now I'm studying for a certificate I didn't know about before to get to an accreditation I really want.
How to do it all (prompt engineering):
Do the groundwork by giving your AI context, details, information, and very specific requests. I loaded a bunch of emails into my email-writing GPT and also told it my career ambitions. It's tweaked my tone just a little. I sound like me, but a bit more professional. Likewise if you're making a reference library. It can't tell you what it doesn't know, but it will try, so be sure to tell it not to infer beyond its data, and to tell you when it doesn't have the information.
Security risks to consider:
Secure access: You absolutely must protect sensitive information and follow whatever AI policy is in place where you work. If there isn't one, spearhead the team working on it. It's a perfect leadership opportunity.
Data protection: Be very careful when sharing sensitive data with AI systems, and know your security. Also check your results! Again, think of AI as an eager but kind of hapless intern and double check their work.
Recognize AI threats: Stay aware of potential AI-driven cyberattacks, such as deepfake videos or social engineering attempts. There have been some huge ones lately!
By getting a handle on AI and being aware of the risks you can improve your work quality, offload the boring stuff, and advance your career. So get started. But be careful.
2 notes ¡ View notes
affiliateinz ¡ 1 year ago
Text
5 Laziest Ways to Make Money Online With ChatGPT
ChatGPT has ignited a wave of AI fever across the world. While it amazes many with its human-like conversational abilities, few know the money-making potential of this advanced chatbot. You can actually generate a steady passive income stream without much effort using ChatGPT. Intrigued to learn how? Here are the 5 Laziest Ways to Make Money Online With ChatGPT.
License AI-Written Books
Get ChatGPT to write complete books on trending or evergreen topics. Fiction, non-fiction, poetry, guides – it can create them all. Self-publish these books online. The upfront effort is minimal after you prompt the AI. Let the passive royalties come in while you relax!
Generate SEO Optimized Blogs
Come up with a blog theme. Get ChatGPT to craft multiple optimized posts around related keywords. Put up the blog and earn advertising revenue through programs like Google AdSense as visitors pour in. The AI handles the hard work of researching topics and crafting content.
Create Online Courses
Online courses are a lucrative passive income stream. Rather than spending weeks filming or preparing materials, have ChatGPT generate detailed course outlines and pre-written scripts. Convert these quickly into online lessons and sell to students.
Trade AI-Generated Stock Insights
ChatGPT can analyze data and generate stock forecasts, though their accuracy is never guaranteed. Develop a system of identifying trading signals based on the AI’s insights. Turn this into a monthly stock picking newsletter or alert service that subscribers pay for.
Build Niche Websites
Passive income favorites like niche sites take ages to build traditionally. With ChatGPT, get the AI to research winning niches, create articles, product reviews and on-page SEO optimization. Then drive organic search traffic and earnings on autopilot.
The beauty of ChatGPT is that it can automate and expedite most manual, tedious tasks. With some strategic prompts, you can easily leverage this AI for passive income without burning yourself out. Give these lazy money-making methods a try!
Thank you for taking the time to read the rest of my article, 5 Laziest Ways to Make Money Online With ChatGPT.
Affiliate Disclaimer :
Some of the links in this article may be affiliate links, which means I receive a small commission at NO ADDITIONAL cost to you if you decide to purchase something. While we receive affiliate compensation for reviews / promotions on this article, we always offer honest opinions, user experiences and real views related to the product or service itself. Our goal is to help readers make the best purchasing decisions; however, the testimonies and opinions expressed are ours only. As always, you should do your own research to verify any claims, results and stats before making any kind of purchase. Clicking links or purchasing products recommended in this article may generate income from affiliate commissions, and you should assume we are compensated for any purchases you make. We review products and services you might find interesting. If you purchase them, we might get a share of the commission from the sale from our partners. This does not drive our decision as to whether or not a product is featured or recommended.
10 notes ¡ View notes
morningstartranslation ¡ 1 year ago
Text
5 Reasons Why You Should Be Careful About Machine Translation
Machine translation (MT) usually refers to using algorithms and machine learning (ML) models to translate natural language text from one language to another without human intervention. The most common MT examples include but are not limited to Google Translate, Bing Microsoft Translator, Amazon Translate and DeepL.
With the rapid development of generative artificial intelligence (AI) and ChatGPT, many industries face unprecedented challenges, and the translation industry hasn't been spared. Taking efficiency and cost into consideration, more and more business clients tend to use machine translation to complete their projects.
However, is it always a wise choice? Here are 5 reasons why you should be careful about machine translation:
① Cultural Accuracy: Every culture possesses unique lexical terms, slang, and colloquialisms that machines haven't yet shown the capability to translate; inaccurate translations may lead to poor interpretation of your brand, vision, market position and business strategies.
② Human Touch: Human translation goes through a time-tested process of multiple rounds of editing and proofreading to ensure that the translation isn't only grammatically correct and readable, but also enhanced for the target audience. On the contrary, machine translation can only generate plain, toneless text; it's fast and budget-friendly, but it can never be engaging.
③ Flexibility: Language is constantly evolving; one single term may have entirely different meanings in different contexts, let alone phrases, sentences or even longer paragraphs. MT tools can only generate translations based on the known corpus; they can't predict and correct specific grammatical and cultural errors like humans do.
④ Layout: Good translation takes time, and so does formatting/layout. When we assess the quality of translation, formatting/layout also plays an important part. Unfortunately, almost all MT tools can't handle this properly; they just ignore it or insert some illustrative text instead.
⑤ Confidentiality: As a responsible language service provider, we should never disclose customer information to any unauthorised third party. But as far as I am aware, some MT tools, especially ChatGPT, may collect and store different kinds of user input, which can be a great security risk for businesses.
In short, it's OK to use MT tools for less important content (i.e., content that does not require translation precision and extensive copywriting). But when it comes to business documents or audience-facing content, nothing can beat human translation.
Visit https://www.morningstartranslation.com/ to learn more.
6 notes ¡ View notes